fix(*) use dedicated shm for rate-limiting plugins #3311
Conversation
Force-pushed from 7f6d0d5 to a3ff4b2
kong/constants.lua (Outdated)

@@ -70,6 +70,7 @@ return {
   DICTS = {
     "kong",
     "kong_cache",
+    "kong_rl_counters",
I am sure you thought about it a lot, but this feature seems to make the rate-limiting plugin a core feature, while it is still a plugin. It feels a bit bad that plugins inject stuff in core. About naming, I'd like kong_rate_limiting better as the dict name, as it pollutes core a bit less.
> I am sure you thought about it a lot, but this feature seems to make the rate-limiting plugin a core feature, while it is still a plugin. It feels a bit bad that plugins inject stuff in core.

Yep. This is also why the fix took so long to appear... We have little choice for now, though.

> I'd like kong_rate_limiting better as the dict name, as it pollutes core a bit less.

We can do that. Probably kong_rate_limiting_counter, for completeness' sake.
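For reference, the rename touches two places: the DICTS list in kong/constants.lua shown above, and the matching lua_shared_dict declaration in Kong's bundled nginx template. A rough sketch, assuming the kong_rate_limiting_counter name proposed here and the 12MB size discussed in the commit message further down (neither is necessarily the value merged in the end):

-- kong/constants.lua -- sketch only, with the proposed name:
DICTS = {
  "kong",
  "kong_cache",
  "kong_rate_limiting_counter",
},

-- the dict itself also has to be declared in the nginx configuration, e.g.:
--   lua_shared_dict kong_rate_limiting_counter 12m;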
@@ -4,7 +4,7 @@ local redis = require "resty.redis"
 local policy_cluster = require "kong.plugins.rate-limiting.policies.cluster"
 local reports = require "kong.core.reports"
 local ngx_log = ngx.log
-local shm = ngx.shared.kong_cache
+local shm = ngx.shared.kong_rl_counters
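For context on what ends up in this dict: the local policy stores plain numeric counters, one key per identifier and configured period, bumped atomically through the shared dict API. A minimal sketch of that pattern, with a hypothetical key layout (not the exact format used by the plugin):

-- Sketch of the local policy's counter pattern (illustrative, not the exact Kong code).
local shm = ngx.shared.kong_rl_counters

local function increment(identifier, period, period_date, value)
  -- hypothetical key layout: "ratelimit:<identifier>:<period_date>:<period>"
  local cache_key = "ratelimit:" .. identifier .. ":" .. period_date .. ":" .. period

  -- the third argument initializes missing keys to 0 (available in the ngx_lua
  -- bundled with recent OpenResty releases); the init_ttl argument mentioned in
  -- the commit message below would additionally let these keys expire on their
  -- own once released upstream.
  local newval, err = shm:incr(cache_key, value, 0)
  if not newval then
    ngx.log(ngx.ERR, "could not increment rate-limiting counter: ", err)
    return nil, err
  end

  return newval
end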
Another option could be adding a new config parameter to the rate-limiting plugin, so that the shm can be configured per plugin configuration. Not sure, though, whether we should still supply a default.
With configure_by_lua, yep! Otherwise, it becomes harder to maintain a custom nginx configuration in sync with the plugins configured in the database...
(That, or a more powerful templating system, which we do not have and which, to be honest, we aren't sure of ever wanting, as it would make custom nginx templates harder and harder to maintain...)
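For illustration only, the per-plugin shm idea floated above would roughly amount to a new schema field; shm_name below is a hypothetical field name in the pre-1.0 plugin schema format, not something this PR adds:

-- Hypothetical sketch: a configurable shm for the rate-limiting plugin.
return {
  fields = {
    -- ... existing fields (limits, policy, etc.) ...
    shm_name = {
      type = "string",
      default = "kong_rate_limiting_counter", -- keep a default, as suggested above
    },
  },
}

The local policy would then look the dict up at runtime with ngx.shared[conf.shm_name] instead of a module-level upvalue, with the caveat discussed above: the chosen dict still has to be declared as a lua_shared_dict in the (possibly custom) nginx configuration.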
Force-pushed from a3ff4b2 to e462456
Force-pushed from e462456 to d8f3671
This is part of a series of fixes:
- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Context
-------
In the `local` mode of the rate-limiting plugins, storing the rate-limiting counters in the same shm used by Kong's database cache is too invasive for the underlying shm, especially when the rate-limiting plugins are used with a `seconds` precision.

On top of exhausting the database cache slots, this approach also generates some form of fragmentation in the shm. This is due to the side-by-side storage of values with sizes of different orders of magnitude (JSON strings vs. an incremented double) and the LRU eviction mechanism. When the shm is full and LRU kicks in, it is highly probable that several rate-limiting counters will be evicted (due to their proliferation), thus not freeing enough space to store the retrieved data, causing a `no memory` error to be reported by the shm.

Solution
--------
Declaring shms that are only used by some plugins is not very elegant: all users (even those not using the rate-limiting plugins) now have to pay a memory cost, although a small one. Unfortunately, in the absence of a more dynamic solution to shm configuration, such as a more dynamic templating engine or a `configure_by_lua` phase, this is the safest solution.

Size rationale
--------------
Running a script generating similar keys and storing similar values (doubles) indicates that a 12MB shm should be able to store about ~48,000 of those values at once. It is important to remember that one Consumer/IP address might use more than one key (in fact, one per period configured on the plugin), and might hit both the rate-limiting and response-ratelimiting plugins at once, and they use the same shm.

Even considering the above, ~48,000 keys per node seems reasonable, since keys of `second` precision will most likely fill up the shm and be candidates for LRU eviction. Our concern lies instead with long-lived limits (and thus, keys) set by the user.

Additionally, a future improvement upon this will be setting the `init_ttl` argument for the rate-limiting keys, which will help **quite considerably** in reducing the footprint of the plugins on the shm. As of this day, this feature has been contributed to ngx_lua but not released yet: openresty/lua-nginx-module#1226

Again, this limit only applies when using the **local** strategy, which also likely means that a load balancer is distributing traffic to a pool of Kong nodes with some sort of consistent load-balancing technique, thus considerably reducing the number of concurrent Consumers a given node needs to handle at once.

See also
--------
Another piece of the fix for the `no memory` errors resides in the behavior of the database caching module upon a full shm. See: thibaultcha/lua-resty-mlcache#41

This patch reduces the likelihood of a full shm (by a lot!), but does not remove it. The above patch ensures a somewhat still sane behavior should the shm happen to be full again.

Fix #3124
Fix #3241
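The sizing script referenced in the rationale is not part of the PR, but the experiment is easy to approximate. A rough sketch, assuming the resty CLI with its --shdict option and a purely illustrative key layout and value:

-- sizing-sketch.lua -- run with, for example:
--   resty --shdict 'kong_rate_limiting_counter 12m' sizing-sketch.lua
-- Illustrative only: key shape and value are assumptions, not Kong's exact format.
local shm = ngx.shared.kong_rate_limiting_counter

local count = 0
while true do
  -- keys roughly the size of a rate-limiting counter key (two uuids, a timestamp, a period)
  local key = string.format("ratelimit:%s:%s:%d:second",
                            "0f6a9a63-9ad9-4b1f-9d3e-000000000000",
                            "5bd7e5c3-1c1a-4c3e-8f1a-000000000000",
                            1514764800 + count)

  -- safe_set never evicts other keys; it fails with "no memory" once the shm is full
  local ok, err = shm:safe_set(key, 42.0)
  if not ok then
    ngx.say("stored ", count, " counters before failing with: ", err)
    break
  end

  count = count + 1
end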
Force-pushed from d8f3671 to ce51957
This is part of a series of fixes:
- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Changes:
* remove the vendored mlcache.lua file, which had received patches for Kong support
* add lua-resty-mlcache 2.0.0 as a dependency
* implement custom IPC options for the DB mlcache instance

Changelog of mlcache 2.0.0: https://github.com/thibaultcha/lua-resty-mlcache/blob/master/CHANGELOG.md#200
Following this mlcache patch: thibaultcha/lua-resty-mlcache#42

We can now specify a different shm for mlcache to cache L3 misses. This is especially helpful in the context of Kong, since client-triggered DB lookups can have a very high cardinality of keys to fetch (e.g. credentials such as API keys) and can make the cache turnover so high that the cache is rendered almost useless (filled with misses, thus evicting actual hits from the cache shm). This is considered a potential attack vector.

The size of this shm (12MB) allows roughly ~45,000 nil sentinel values to be stored (depending on the size of the keys). This value is aligned with that chosen for the rate-limiting shared dict in PR #3311 (12MB and about ~48,000 simultaneous counters).
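To make the two commit messages above more concrete, here is a minimal sketch of creating an mlcache instance with both a dedicated miss shm and custom IPC hooks, based on the public lua-resty-mlcache 2.x API. The dict names and event-bus callbacks are illustrative, not Kong's actual wiring:

local mlcache = require "resty.mlcache"

-- Illustrative dict names; each must be declared as a lua_shared_dict in nginx.conf.
local cache, err = mlcache.new("db_cache", "kong_cache", {
  lru_size = 1000,
  ttl      = 3600,
  neg_ttl  = 300,

  -- new in mlcache 2.0.0: store nil sentinels (L3 misses) in their own shm, so
  -- high-cardinality misses cannot evict actual hits from the main cache shm
  shm_miss = "kong_db_cache_miss",

  -- custom IPC options: plug cache invalidation into an existing event bus
  -- (hypothetical stubs standing in for Kong's worker-events integration)
  ipc = {
    register_listeners = function(events)
      -- subscribe each mlcache channel/callback pair to the event bus here
    end,
    broadcast = function(channel, data)
      -- publish the event to other workers here
      return true
    end,
    poll = function(timeout)
      -- drain pending events from the event bus here
      return true
    end,
  },
})
if not cache then
  error("could not create mlcache instance: " .. err)
end

-- Usage is unchanged: misses returned by the L3 callback are now cached in the
-- dedicated miss dict instead of the main one.
local credential, err = cache:get("apikeys:some-key", nil, function()
  return nil -- simulate a DB miss
end)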
This is part of a series of fixes:
- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Changes:
* remove the vendored mlcache.lua file, which had received patches for Kong support
* add lua-resty-mlcache 2.0.1 as a dependency
* implement custom IPC options for the DB mlcache instance

Changelog of mlcache 2.0.0: https://github.com/thibaultcha/lua-resty-mlcache/blob/master/CHANGELOG.md#200
Changelog of mlcache 2.0.1: https://github.com/thibaultcha/lua-resty-mlcache/blob/master/CHANGELOG.md#201